Session 1B

Cyber-Physical Systems

Conference
11:00 AM — 12:20 PM HKT
Local
Jun 7 Mon, 11:00 PM — 12:20 AM EDT

ConAML: Constrained Adversarial Machine Learning for Cyber-Physical Systems

Jiangnan Li (University of Tennessee, Knoxville, USA), Yingyuan Yang (University of Illinois Springfield, USA), Jinyuan Sun (University of Tennessee, Knoxville, USA), Kevin Tomsovic (University of Tennessee, Knoxville, USA), Hairong Qi (University of Tennessee, Knoxville, USA)

Recent research has demonstrated that seemingly well-trained machine learning (ML) models are highly vulnerable to adversarial examples. As ML techniques become a popular solution for cyber-physical system (CPS) applications in the research literature, the security of these applications is of concern. However, current studies on adversarial machine learning (AML) mainly focus on pure cyberspace domains; the risks that adversarial examples pose to CPS applications have not been well investigated. In particular, due to the distributed nature of data sources and the inherent physical constraints imposed by CPSs, the widely used threat models and state-of-the-art AML algorithms from previous cyberspace research become infeasible. We study the potential vulnerabilities of ML applied in CPSs by proposing Constrained Adversarial Machine Learning (ConAML), which generates adversarial examples that satisfy the intrinsic constraints of the physical systems. We first summarize the differences between AML in CPSs and AML in existing cyberspace systems and propose a general threat model for ConAML. We then design a best-effort search algorithm to iteratively generate adversarial examples under linear physical constraints. We evaluate our algorithms with simulations of two typical CPSs, power grids and a water treatment system. The results show that our ConAML algorithms can effectively generate adversarial examples that significantly degrade the performance of the ML models even under practical constraints.
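
A minimal sketch of the constrained-attack idea, assuming linear equality constraints A·x = b (e.g., power-flow balance), full row rank of A, and white-box gradient access; any perturbation projected into the null space of A preserves the constraints. The name grad_fn and all parameter values are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def conaml_attack(x, grad_fn, A, step=0.01, eps=0.1, iters=50):
    """Best-effort iterative attack that keeps linear constraints A @ x = b
    invariant: any delta with A @ delta = 0 preserves them (assumes A has
    full row rank)."""
    # Projector onto the null space of A: A @ (P @ d) = 0 for any d.
    P = np.eye(A.shape[1]) - A.T @ np.linalg.inv(A @ A.T) @ A
    delta = np.zeros_like(x)
    for _ in range(iters):
        g = grad_fn(x + delta)                   # ascend the model's loss
        delta = delta + step * (P @ np.sign(g))  # constraint-preserving step
        delta = P @ np.clip(delta, -eps, eps)    # enforce budget, re-project
    return x + delta
```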

EchoVib: Exploring Voice Authentication via Unique Non-Linear Vibrations of Short Replayed Speech

S Abhishek Anand (The University of Alabama at Birmingham, USA), Jian Liu (University of Tennessee, Knoxville, USA), Chen Wang (Louisiana State University, USA), Maliheh Shirvanian (Visa Research, USA), Nitesh Saxena (The University of Alabama at Birmingham, USA), Yingying Chen (Rutgers University, USA)

Recent advances in speaker verification and speech processing technology have seen voice authentication adopted on a wide scale in commercial applications like online banking and customer care support, and on devices such as smartphones and IoT voice assistant systems. However, it has been shown that current voice authentication systems can be ineffective against voice synthesis attacks that mimic a user’s voice to high precision. In this work, we suggest a paradigm shift away from traditional voice authentication systems, which operate in the audio domain and are thus susceptible to speech synthesis attacks mounted in that same domain. We leverage a motion sensor’s capability to pick up phonatory vibrations, which can uniquely identify a user via voice signatures in the vibration domain. The user’s speech is played/echoed back by a device’s speaker for a short duration (hence our method is termed EchoVib), and the resulting non-linear phonatory vibrations are picked up by the motion sensor for speaker recognition. The uniqueness of the device’s speaker and its accelerometer results in a device-specific fingerprint in response to the echoed speech. The use of the vibration domain and its non-linear relationship with audio allows EchoVib to resist the state-of-the-art voice synthesis attacks shown to be successful in the audio domain. We develop an instance of EchoVib using the onboard loudspeaker and the accelerometer embedded in smartphones as the authenticator, based on machine learning techniques. Our evaluation shows that even with the low-quality loudspeaker and the low sampling rate of accelerometer recordings, EchoVib can identify users with an accuracy of over 90%. We also analyze our system against state-of-the-art voice synthesis attacks and show that it can distinguish between the morphed and the original speaker’s voice samples, correctly rejecting the morphed samples with a success rate of 85% for voice conversion and voice modeling attacks. We believe that using the vibration domain to detect synthesized speech is effective because the unique phonatory vibration signatures are hard to preserve, and the non-linear mapping of the speaker- and accelerometer-specific vibration response to the voice in the audio domain is difficult to mimic.
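
A rough sketch of the vibration-domain recognition pipeline as described: spectral features extracted from the accelerometer trace feed a conventional classifier. The feature choice and the SVM are assumptions for illustration; the paper does not prescribe these exact steps:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def vib_features(accel, frame=256, hop=128):
    """Mean/std magnitude spectrum over short frames of a 1-D
    accelerometer trace captured while the device echoes the speech."""
    frames = [accel[i:i + frame] for i in range(0, len(accel) - frame, hop)]
    S = np.array([np.abs(np.fft.rfft(f * np.hanning(frame))) for f in frames])
    return np.concatenate([S.mean(axis=0), S.std(axis=0)])

def train_authenticator(recordings, speaker_labels):
    """Fit a speaker classifier on enrollment recordings."""
    X = np.array([vib_features(r) for r in recordings])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return clf.fit(X, speaker_labels)
```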

HVAC: Evading Classifier-based Defenses in Hidden Voice Attacks

Yi Wu (University of Tennessee, Knoxville, USA), Xiangyu Xu (Shanghai Jiao Tong University, China), Payton R. Walker (University of Alabama at Birmingham, USA), Jian Liu (University of Tennessee, Knoxville, USA), Nitesh Saxena (University of Alabama at Birmingham, USA), Yingying Chen (Rutgers University, USA), Jiadi Yu (Shanghai Jiao Tong University, China)

Recent years have witnessed the rapid development of automatic speech recognition (ASR) systems, providing a practical voice-user interface for widely deployed smart devices. With the ever-growing deployment of such an interface, several voice-based attack schemes have been proposed against current ASR systems to exploit certain vulnerabilities. Posing one of the more serious threats, the hidden voice attack uses the human-machine perception gap to generate obfuscated/hidden voice commands that are unintelligible to human listeners but can be interpreted as commands by machines. However, due to the nature of hidden voice commands (i.e., normal and obfuscated samples exhibit a significant difference in their acoustic features), recent studies show that they can be easily detected and defended against by a pre-trained classifier, making them less threatening. In this paper, we validate that such a defense strategy can be circumvented with a more advanced type of hidden voice attack called HVAC. Our proposed HVAC attack can easily bypass existing learning-based defense classifiers while preserving all the essential characteristics of hidden voice attacks (i.e., unintelligible to humans and recognizable to machines). Specifically, we find that all classifier-based defenses build on classification models trained with acoustic features extracted from the entire audio of normal and obfuscated samples. However, only the speech parts (i.e., human voice parts) of these samples contain the linguistic information needed for machine transcription. We thus propose a fusion-based method to combine a normal sample and the corresponding obfuscated sample into a hybrid HVAC command, which can effectively deceive the defense classifiers. Moreover, to make the command even less intelligible to humans, we tune the speed and pitch of the sample, distorting it further in the time domain while ensuring it can still be recognized by machines. Extensive physical over-the-air experiments demonstrate the robustness and generalizability of our HVAC attack under different realistic attack scenarios. Results show that our HVAC commands achieve an average success rate of 94.1% in bypassing machine-learning-based defense approaches under various realistic settings.
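
One plausible reading of the fusion step, sketched under strong assumptions: a crude energy-based voice-activity detector locates speech regions, and the hybrid keeps obfuscated audio there while filling the rest with normal audio. The VAD, threshold, and splicing policy are all illustrative, not the authors' method:

```python
import numpy as np

def energy_vad(x, frame=400, thresh=0.02):
    """Crude energy-based voice-activity mask, one flag per sample."""
    n_frames = len(x) // frame
    rms = np.array([np.sqrt(np.mean(x[i*frame:(i+1)*frame] ** 2))
                    for i in range(n_frames)])
    return np.repeat(rms > thresh, frame)

def fuse_commands(normal, obfuscated):
    """Hybrid command: obfuscated audio where speech is detected (so ASR
    still transcribes it), normal audio elsewhere (so whole-audio features
    look benign to a defense classifier)."""
    n = min(len(normal), len(obfuscated))
    mask = energy_vad(obfuscated[:n])
    m = len(mask)
    return np.where(mask, obfuscated[:m], normal[:m])
```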

Conware: Automated Modeling of Hardware Peripherals

Chad Spensky (University of California, Santa Barbara, USA), Aravind Machiry (University of California, Santa Barbara, USA), Nilo Redini (University of California, Santa Barbara, USA), Colin Unger (University of California, Santa Barbara, USA), Graham Foster (University of California, Santa Barbara, USA), Evan Blasband (University of California, Santa Barbara, USA), Hamed Okhravi (MIT Lincoln Laboratory, USA), Christopher Kruegel (University of California, Santa Barbara, USA), Giovanni Vigna (University of California, Santa Barbara, USA)

Emulation is at the core of many security analyses. However, emulating embedded systems is still not possible in most cases. To facilitate this critical analysis, we present Conware, a hardware emulation framework that can automatically generate models for hardware peripherals, which alleviates one of the major challenges currently hindering embedded systems emulation. Conware enables individual peripherals to be modeled, exported, and combined with other peripherals in a pluggable fashion. Conware achieves this by first obtaining a recording of the low-level hardware interactions between the firmware and the peripheral, using either existing methods or our source-code instrumentation technique. These recordings are then used to create high-fidelity automata representations of the peripheral using novel automata-generation techniques. The various models can then be merged to facilitate full-system emulation of any embedded firmware that uses any of the modeled peripherals, even if that specific firmware or its target hardware was never directly instrumented. Indeed, we demonstrate that Conware is able to successfully emulate a peripheral-heavy firmware binary that was never instrumented, by merging the models of six unique peripherals that were trained on a development board using only the vendor-provided example code.
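
As a much-simplified illustration of modeling a peripheral from recordings, the sketch below replays observed MMIO read values per register; Conware's actual automata-generation and model-merging techniques are considerably richer. The recording format (op, reg, value) is an assumption:

```python
from collections import defaultdict, deque

class PeripheralModel:
    """Replay-based peripheral model built from a recording of MMIO
    events (op, reg, value): reads return the recorded value sequence for
    that register, then stick at the last value."""
    def __init__(self, recording):
        self.read_traces = defaultdict(deque)
        self.last = {}
        for op, reg, value in recording:
            if op == "read":
                self.read_traces[reg].append(value)

    def read(self, reg):
        if self.read_traces[reg]:
            self.last[reg] = self.read_traces[reg].popleft()
        return self.last.get(reg, 0)

    def write(self, reg, value):
        # A fuller model would use writes to switch between automaton
        # states, as Conware's merged automata do.
        pass
```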

Session Chair

Mu Zhang

Session 2B

Hardware Security (I)

Conference
2:00 PM — 3:20 PM HKT
Local
Jun 8 Tue, 2:00 AM — 3:20 AM EDT

Red Alert for Power Leakage: Exploiting Intel RAPL-Induced Side Channels

Zhenkai Zhang (Texas Tech University, USA), Sisheng Liang (Texas Tech University, USA), Fan Yao (University of Central Florida, USA), Xing Gao (University of Delaware, USA)

RAPL (Running Average Power Limit) is a hardware feature introduced by Intel to facilitate power management. Even though RAPL and its supporting software interfaces can benefit power management significantly, they were unfortunately designed without taking certain security issues into careful consideration. In this paper, we demonstrate that information leaked through RAPL-induced side channels can be exploited to mount realistic attacks. Specifically, we construct a new RAPL-based covert channel using a single AVX instruction, which can exfiltrate data across different boundaries (e.g., those established by containers in software or even by CPUs in hardware); and we investigate the first RAPL-based website fingerprinting technique, which can identify visited webpages with high accuracy (up to 99% on the regular network using a browser like Chrome or Safari, and up to 81% on the Tor anonymity network). These two studies form a preliminary examination of the security implications RAPL imposes. In addition, we discuss some possible countermeasures.
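
For context, the sketch below samples the Linux powercap interface that exposes RAPL energy counters (reading it may require elevated privileges on recent kernels); a covert-channel receiver would threshold such deltas to recover bits modulated by the sender's AVX activity. This is background illustration, not the authors' attack code:

```python
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 domain

def read_energy_uj():
    with open(RAPL) as f:
        return int(f.read())

def sample_power(duration=1.0, interval=0.05):
    """Return approximate package power (watts) per interval. Ignores
    counter wraparound for brevity."""
    samples, prev, t0 = [], read_energy_uj(), time.time()
    while time.time() - t0 < duration:
        time.sleep(interval)
        cur = read_energy_uj()
        samples.append((cur - prev) / (interval * 1e6))
        prev = cur
    return samples
```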

PLI-TDC: Super Fine Delay-Time Based Physical-Layer Identification with Time-to-Digital Converter for In-Vehicle Networks

Shuji Ohira (Nara Institute of Science and Technology, Japan), Araya Kibrom Desta (Nara Institute of Science and Technology, Japan), Ismail Arai (Nara Institute of Science and Technology, Japan), Kazutoshi Fujikawa (Nara Institute of Science and Technology, Japan)

Recently, cyberattacks on the Controller Area Network (CAN), one of the major automotive networks, have become a severe problem. CAN is a protocol for communication among Electronic Control Units (ECUs) and the de-facto standard of automotive networks. Security researchers have pointed out several vulnerabilities in CAN, such as the inability to distinguish spoofed messages, owing to its lack of authentication and sender identification. To prevent malicious message injection, we should at least identify the malicious senders by analyzing live messages. Previous work proposed a delay-time based sender identification method called Divider. However, Divider cannot identify ECUs with similar delay variations because its measurement clock has coarse time resolution. In addition, Divider cannot adapt to delay-time drift caused by temperature drift on the ambient buses. In this paper, we propose a super-fine delay-time based sender identification method using a Time-to-Digital Converter (TDC). The proposed method achieves an accuracy of 99.67% on a CAN bus prototype and 97.04% in a real vehicle. Moreover, in an environment with drifting temperature, the proposed method achieves a mean accuracy of over 99%.
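
A toy version of delay-time-based sender identification, assuming per-frame delay feature vectors (e.g., TDC-measured bit-edge delays) are already available; a nearest-centroid rule stands in for the paper's classifier:

```python
import numpy as np

class DelayFingerprint:
    """Nearest-centroid sender identification over delay feature vectors."""
    def fit(self, delay_vecs, senders):
        self.centroids = {
            s: np.mean([d for d, lab in zip(delay_vecs, senders) if lab == s],
                       axis=0)
            for s in set(senders)
        }
        return self

    def predict(self, delay_vec):
        return min(self.centroids,
                   key=lambda s: np.linalg.norm(delay_vec - self.centroids[s]))
```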

HECTOR-V: A Heterogeneous CPU Architecture for a Secure RISC-V Execution Environment

Pascal Nasahl (Graz University of Technology, Austria), Robert Schilling (Graz University of Technology, Austria), Mario Werner (Graz University of Technology, Austria), Stefan Mangard (Graz University of Technology, Austria)

To ensure secure and trustworthy execution of applications in potentially insecure environments, vendors frequently embed trusted execution environments (TEEs) into their systems. Applications executed in this safe, isolated space are protected from adversaries, including a malicious operating system. TEEs are usually built by integrating protection mechanisms directly into the processor or by using dedicated external secure elements. However, both of these approaches cover only a narrow threat model, resulting in limited security guarantees. Enclaves nested into the application processor typically provide weak isolation between the secure and non-secure domains, especially when considering side-channel attacks. Although external secure elements do provide strong isolation, the slow communication interface to the application processor is exposed to adversaries and restricts the use cases. Independently of the approach used, TEEs often lack the ability to establish secure communication with peripherals, and most operating systems executed inside TEEs do not provide state-of-the-art defense strategies, making them vulnerable to various attacks. We argue that TEEs, such as Intel SGX or ARM TrustZone, implemented on the main application processor are insecure, especially when considering side-channel attacks. In this paper, we demonstrate how a heterogeneous multicore architecture can be utilized to realize a secure TEE design. We directly embed a secure processor into our HECTOR-V architecture to provide strong isolation between the secure and non-secure domains. The tight coupling of the TEE and the application processor enables HECTOR-V to provide mechanisms for establishing secure communication channels between different devices. We further introduce the RISC-V Secure Co-Processor (RVSCP), a security-hardened processor tailored for TEEs. To secure applications executed inside the TEE, RVSCP provides hardware-enforced control-flow integrity and rigorously restricts I/O accesses to certain execution states. RVSCP reduces the trusted computing base to a minimum by providing operating system services directly in hardware.
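
To illustrate one of RVSCP's properties, hardware-enforced control-flow integrity, here is a conceptual shadow-stack model in software; it shows the invariant such a unit enforces, not HECTOR-V's actual hardware design:

```python
class ShadowStackCFI:
    """Software model of the backward-edge CFI invariant a hardware unit
    enforces: every return must target the address saved by the matching
    call."""
    def __init__(self):
        self._shadow = []

    def on_call(self, return_addr):
        self._shadow.append(return_addr)

    def on_return(self, target_addr):
        if not self._shadow or self._shadow.pop() != target_addr:
            raise RuntimeError("CFI violation: unexpected return target")
```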

CrypTag: Thwarting Physical and Logical Memory Vulnerabilities using Cryptographically Colored Memory

Pascal Nasahl (Graz University of Technology, Austria), Robert Schilling (Graz University of Technology, Austria), Mario Werner (Graz University of Technology, Austria), Jan Hoogerbrugge (NXP Semiconductors Eindhoven, Netherlands), Marcel Medwed (NXP Semiconductors, Austria), Stefan Mangard (Graz University of Technology, Austria)

Memory vulnerabilities are a major threat to many computing systems. To effectively thwart spatial and temporal memory vulnerabilities, full logical memory safety is required. However, current mitigation techniques for memory safety are either too expensive or trade security for efficiency. One promising attempt to detect memory safety vulnerabilities in hardware is memory coloring, a security policy deployed on top of tagged memory architectures. However, due to the memory storage and bandwidth overhead of large tags, commodity tagged memory architectures usually provide only small tag sizes, limiting their use for security applications. Irrespective of logical memory safety, physical memory safety is a necessity in the hostile environments prevalent in modern cloud computing and IoT devices. Architectures from Intel and AMD already implement transparent memory encryption to maintain the confidentiality and integrity of all off-chip data. Surprisingly, the combination of both logical and physical memory safety has not yet been extensively studied in previous research, and a naïve combination of the two security strategies would accumulate both overheads. In this paper, we propose CrypTag, an efficient hardware/software co-design that mitigates a large class of logical memory safety issues and provides full physical memory safety. At its core, CrypTag utilizes a transparent memory encryption engine not only for physical memory safety but also for memory coloring, at hardly any additional cost. The design avoids any overhead for tag storage by embedding memory colors in the upper bits of a pointer and using these bits as an additional input to the memory encryption. A custom compiler extension automatically leverages CrypTag to detect logical memory safety issues in commodity programs and is fully backward compatible. To evaluate the design, we extended a RISC-V processor featuring memory encryption with CrypTag, and we developed an LLVM-based toolchain that automatically protects all dynamic, local, and global data. Our evaluation shows a hardware overhead of less than 1% and an average runtime overhead between 1.5% and 6.1% for thwarting logical memory safety vulnerabilities on a system already featuring memory encryption. Enhancing a system with memory encryption in the first place typically induces a runtime overhead between 5% and 109.8% for commercial and open-source encryption units.
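
A toy model of the core trick, under stated assumptions: the color sits in otherwise-unused upper pointer bits and tweaks the memory encryption, so an access through a wrongly colored pointer decrypts to garbage. The hash-based keystream is purely illustrative; real designs use a tweakable block cipher:

```python
import hashlib, os

COLOR_SHIFT = 56  # unused upper pointer bits hold the 8-bit color

def colorize(ptr, color):
    """Embed a memory color in the top byte of a 64-bit pointer."""
    return (ptr & ~(0xFF << COLOR_SHIFT)) | ((color & 0xFF) << COLOR_SHIFT)

def color_of(ptr):
    return (ptr >> COLOR_SHIFT) & 0xFF

def encrypt_block(key, addr, color, plaintext16):
    """Toy tweakable encryption where the color tweaks the keystream."""
    ks = hashlib.sha256(key + addr.to_bytes(8, "little") + bytes([color])).digest()
    return bytes(p ^ k for p, k in zip(plaintext16, ks))

key = os.urandom(32)
ptr = colorize(0x0000_7FFF_DEAD_B000, color=0x3A)
ct = encrypt_block(key, ptr & ((1 << COLOR_SHIFT) - 1), color_of(ptr),
                   b"sixteen byte msg")
# Decrypting with a different color (e.g., after the allocation is
# re-colored on free) yields garbage instead of the original data.
```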

Session Chair

Fengwei Zhang

Session 3B

ML and Security (II)

Conference
3:40 PM — 5:00 PM HKT
Local
Jun 8 Tue, 3:40 AM — 5:00 AM EDT

HoneyGen: Generating Honeywords Using Representation Learning

Antreas Dionysiou (University of Cyprus, Cyprus), Vassilis Vassiliades (Research Centre on Interactive Media, Smart Systems and Emerging Technologies, Cyprus), Elias Athanasopoulos (University of Cyprus, Cyprus)

Honeywords are false passwords injected into a database for detecting password leakage. Generating honeywords is a challenging problem due to the various assumptions about the adversary’s knowledge as well as users’ password-selection behaviour. The success of a Honeywords Generation Technique (HGT) lies in the resulting honeywords; the method fails if an adversary can easily distinguish the real password. In this paper, we propose HoneyGen, a practical and highly robust HGT that produces realistic-looking honeywords. We do this by leveraging representation learning techniques to learn useful and explanatory representations from a massive collection of unstructured data, i.e., each operator’s password database. We perform both a quantitative and a qualitative evaluation of our framework using state-of-the-art metrics. Our results suggest that HoneyGen generates high-quality honeywords that cause sophisticated attackers to achieve low distinguishing success rates.
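
A bare-bones stand-in for the representation-learning step, assuming a password corpus is available: character-trigram vectors approximate learned embeddings, and honeywords are the real password's nearest neighbors in that space. The paper uses proper representation learning over the operator's database, not this simplification:

```python
import numpy as np

def ngrams(pw, n=3):
    return [pw[i:i + n] for i in range(len(pw) - n + 1)]

def embed(pw, vocab):
    """Normalized bag-of-trigrams vector (a crude embedding stand-in)."""
    v = np.zeros(len(vocab))
    for g in ngrams(pw):
        if g in vocab:
            v[vocab[g]] += 1
    return v / (np.linalg.norm(v) + 1e-9)

def honeywords(real_pw, corpus, k=5):
    """Pick the k corpus passwords closest to the real one in embedding
    space as honeywords."""
    vocab = {g: i for i, g in enumerate({g for p in corpus for g in ngrams(p)})}
    target = embed(real_pw, vocab)
    ranked = sorted(corpus, key=lambda p: -(target @ embed(p, vocab)))
    return [p for p in ranked if p != real_pw][:k]
```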

On Detecting Deception in Space Situational Awareness

James Pavur (Oxford University, United Kingdom), Ivan Martinovic (Oxford University, United Kingdom)

Space Situational Awareness (SSA) data is critical to the safe piloting of satellites through an ever-growing field of orbital debris. However, measurement complexity means that most satellite operators cannot independently acquire SSA data and must rely on a handful of centralized repositories operated by major space powers. As interstate competition in orbit increases, so does the threat of attacks abusing these information-sharing relationships. This paper offers one of the first considerations of defense techniques against SSA deception. Building on historical precedent and real-world SSA data, we simulate an attack whereby an SSA operator seeks to disguise spy satellites as pieces of debris. We further develop and evaluate a machine-learning-based anomaly detection tool which allows defenders to detect 90-98% of deception attempts with little to no in-house astrometry hardware. Beyond the direct contribution of this system, the paper takes a unique interdisciplinary approach, drawing connections between cyber-security, astrophysics, and international security studies. It makes the general case that systems security methods can tackle many novel and complex problems in a historically neglected domain, and it provides methods and techniques for doing so.
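
A minimal sketch of what such an anomaly detector could look like, assuming per-object feature vectors derived from catalog entries (orbital elements, observables); the feature set and the use of scikit-learn's IsolationForest are illustrative assumptions, not the paper's exact tool:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def fit_deception_detector(catalog_features):
    """Unsupervised detector over per-object feature vectors (e.g.,
    semi-major axis, eccentricity, inclination, brightness); objects
    that deviate from the debris population get flagged."""
    det = IsolationForest(n_estimators=200, contamination=0.02, random_state=0)
    return det.fit(catalog_features)

def flag_suspects(det, new_entries):
    return np.where(det.predict(new_entries) == -1)[0]  # -1 marks anomalies
```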

AMEBA: An Adaptive Approach to the Black-Box Evasion of Machine Learning Models

Stefano Calzavara (Università Ca' Foscari Venezia, Italy), Lorenzo Cazzaro (Università Ca' Foscari Venezia, Italy), Claudio Lucchese (Università Ca' Foscari Venezia, Italy)

Machine learning models are vulnerable to evasion attacks, where the attacker starts from a correctly classified instance and perturbs it so as to induce a misclassification. In the black-box setting, where the attacker only has query access to the target model, traditional attack strategies exploit a property known as transferability, i.e., the empirical observation that evasion attacks often generalize across different models. The attacker can thus rely on the following two-step attack strategy: (i) query the target model to learn how to train a surrogate model approximating it; and (ii) craft evasion attacks against the surrogate model, hoping that they “transfer” to the target model. This attack strategy is sub-optimal because it assumes a strict separation of the two steps and under-approximates the possible actions that a real attacker might take. In this work we propose AMEBA, the first adaptive approach to the black-box evasion of machine learning models. AMEBA builds on a well-known optimization problem, known as the Multi-Armed Bandit, to infer the best alternation between actions spent on surrogate model training and on evasion attack crafting. We experimentally show on public datasets that AMEBA outperforms traditional two-step attack strategies.
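
The bandit formulation reduces to choosing, at every round, between two arms. A minimal epsilon-greedy sketch follows; AMEBA's actual action space and reward design may differ, and the reward scheme here is an assumption:

```python
import random

class AmebaBandit:
    """Epsilon-greedy two-armed bandit alternating between improving the
    surrogate and crafting evasion attempts."""
    ACTIONS = ("train_surrogate", "craft_attack")

    def __init__(self, eps=0.1):
        self.eps = eps
        self.value = {a: 0.0 for a in self.ACTIONS}
        self.count = {a: 0 for a in self.ACTIONS}

    def choose(self):
        if random.random() < self.eps:
            return random.choice(self.ACTIONS)
        return max(self.ACTIONS, key=self.value.get)

    def update(self, action, reward):
        # Incremental mean of the observed rewards per arm.
        self.count[action] += 1
        self.value[action] += (reward - self.value[action]) / self.count[action]
```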

Stealing Deep Reinforcement Learning Models for Fun and Profit

Kangjie Chen (Nanyang Technological University, Singapore), Shangwei Guo (Nanyang Technological University, Singapore), Tianwei Zhang (Nanyang Technological University, Singapore), Xiaofei Xie (Nanyang Technological University, Singapore), Yang Liu (Nanyang Technological University, Singapore)

This paper presents the first model extraction attack against Deep Reinforcement Learning (DRL), which enables an adversary to precisely recover a black-box DRL model solely from its interaction with the environment. Model extraction attacks against supervised deep learning models have been widely studied. However, those techniques cannot be applied to the reinforcement learning scenario due to DRL models’ high complexity, stochasticity, and limited observable information. We propose a novel methodology to overcome these challenges. The key insight of our approach is that the process of DRL model extraction is equivalent to imitation learning, a well-established solution for learning sequential decision-making policies. Based on this observation, our method first builds a classifier to reveal the training-algorithm family of the targeted DRL model solely from its predicted actions, and then leverages state-of-the-art imitation learning techniques to replicate the model from the identified algorithm family. Experimental results indicate that our methodology can effectively recover DRL models with high fidelity and accuracy. We also demonstrate two use cases to show that our model extraction attack can (1) significantly improve the success rate of adversarial attacks, and (2) steal DRL models stealthily even when they are protected by DNN watermarks. These results pose a severe threat to the intellectual property protection of DRL applications.
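
The imitation-learning insight can be illustrated with behavioral cloning, its simplest form: fit a classifier from observed states to the victim policy's actions. The MLP and the discrete-action assumption are illustrative; the paper layers algorithm-family identification and stronger imitation techniques on top of this idea:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def clone_policy(states, actions):
    """Behavioral cloning: fit a classifier mapping observed environment
    states to the victim policy's discrete actions."""
    clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500)
    return clf.fit(np.asarray(states), np.asarray(actions))

# Usage: surrogate = clone_policy(observed_states, observed_actions)
#        surrogate.predict(state.reshape(1, -1))
```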

Session Chair

Pino Caballero-Gil
